Motivated by the goal of endowing robots with a means for focusing attention in order to operate reliably in complex, uncertain, and time-varying environments, we consider how a robot can (i) determine which portions of its environment to pay attention to at any given point in time, (ii) infer changes in context (e.g., task or environment dynamics), and (iii) switch its attention accordingly. In this work, we tackle these questions by modeling context switches in a time-varying Markov decision process (MDP) framework. We utilize the theory of bisimulation-based state abstractions in order to synthesize mechanisms for paying attention to context-relevant information. We then present an algorithm based on Bayesian inference for detecting changes in the robot's context (task or environment dynamics) as it operates online, and use this to trigger switches between different abstraction-based attention mechanisms. Our approach is demonstrated on two examples: (i) an illustrative discrete-state tracking problem, and (ii) a continuous-state tracking problem implemented on a quadrupedal hardware platform. These examples demonstrate the ability of our approach to detect context switches online and robustly ignore task-irrelevant distractors by paying attention to context-relevant information.
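The abstract above describes using online Bayesian inference to detect context switches. As a minimal illustrative sketch (not the paper's actual algorithm), the following assumes two hypothetical contexts, each modeled as a Bernoulli likelihood over a binary tracking-success signal, and declares a switch when the posterior over contexts crosses a threshold:

```python
# Hypothetical sketch of Bayesian context detection: two candidate
# contexts, each a Bernoulli observation model over a binary
# "tracking success" signal. The context names and probabilities
# are illustrative assumptions, not from the paper.
CONTEXTS = {"slow_target": 0.9, "fast_target": 0.4}  # P(success | context)

def update_posterior(posterior, obs):
    """One Bayesian update: scale each context's weight by its likelihood."""
    new = {}
    for ctx, p_success in CONTEXTS.items():
        likelihood = p_success if obs else 1.0 - p_success
        new[ctx] = posterior[ctx] * likelihood
    z = sum(new.values())
    return {ctx: w / z for ctx, w in new.items()}

def detect_context(observations, threshold=0.95):
    """Run updates online; return the first context whose posterior
    exceeds the threshold (None if neither does)."""
    posterior = {ctx: 1.0 / len(CONTEXTS) for ctx in CONTEXTS}
    for obs in observations:
        posterior = update_posterior(posterior, obs)
        for ctx, w in posterior.items():
            if w > threshold:
                return ctx, posterior
    return None, posterior

# A run of mostly failed tracking steps shifts belief toward "fast_target".
ctx, post = detect_context([0, 0, 1, 0, 0, 0])
```

In a full system, the detected context would trigger a switch to the abstraction-based attention mechanism synthesized for that context.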
Large ML models and datasets have necessitated distributed model training on multi-GPU systems. To harness the power of multi-GPU systems, it is critical to eliminate bottlenecks in inter-GPU communication, a problem made challenging by the heterogeneous nature of interconnects. In this work, we present TACCL, a synthesizer of collective communication primitives for large-scale multi-GPU systems. TACCL encodes a profiled topology and input size into a synthesis problem to generate optimized communication algorithms. TACCL is built on top of the standard NVIDIA Collective Communication Library (NCCL), allowing it to serve as a drop-in replacement for GPU communication in frameworks such as PyTorch with minimal changes. TACCL generates algorithms for communication primitives such as Allgather, Alltoall, and Allreduce that are up to $3\times$ faster than NCCL. Using TACCL's algorithms speeds up end-to-end training of an internal mixture-of-experts model by 17%. By decomposing the optimization problem into parts and exploiting symmetry in multi-GPU topologies, TACCL synthesizes collectives for up to 80 GPUs in less than three minutes, at least two orders of magnitude faster than other state-of-the-art synthesis-based collective communication libraries.
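For context on the collectives TACCL synthesizes, a classic baseline algorithm for Allgather is the ring schedule, in which each GPU forwards one chunk to its neighbor per step and every GPU holds all chunks after N-1 steps. The following is an illustrative simulation of that textbook schedule, not TACCL's synthesized output:

```python
def ring_allgather(num_gpus):
    """Simulate a classic ring Allgather: each GPU starts with its own
    chunk; at step s, GPU g forwards chunk (g - s) mod N to GPU g+1.
    After num_gpus - 1 steps, every GPU holds all chunks."""
    # buffers[g] is the set of chunk ids GPU g currently holds
    buffers = [{g} for g in range(num_gpus)]
    steps = 0
    while any(len(b) < num_gpus for b in buffers):
        sends = []
        for g in range(num_gpus):
            chunk = (g - steps) % num_gpus  # chunk received most recently
            sends.append((chunk, (g + 1) % num_gpus))
        for chunk, dst in sends:
            buffers[dst].add(chunk)
        steps += 1
    return steps, buffers

steps, buffers = ring_allgather(8)  # completes in 7 steps on 8 GPUs
```

A synthesizer like TACCL searches over schedules of this kind, but tailored to the profiled interconnect topology and input size rather than assuming a uniform ring.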
There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms, such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs), requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that is competitive with state-of-the-art, hand-tuned libraries for low-power CPUs, mobile GPUs, and server-class GPUs. We also demonstrate TVM's ability to target new accelerator back-ends, such as an FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies.
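The operator fusion mentioned above combines a chain of operators into one loop so that intermediates never hit memory. A minimal sketch of the idea in plain Python (illustrative only, not TVM's actual fusion pass), counting buffer writes as a stand-in for memory traffic:

```python
def unfused(x):
    """Unfused: each elementwise op materializes a full intermediate buffer."""
    t1 = [v * 2.0 for v in x]        # multiply -> buffer 1
    t2 = [v + 1.0 for v in t1]       # add      -> buffer 2
    out = [max(v, 0.0) for v in t2]  # relu     -> buffer 3
    return out, 3                    # three full buffers written

def fused(x):
    """Fused: one loop, one output buffer, no materialized intermediates."""
    out = [max(v * 2.0 + 1.0, 0.0) for v in x]
    return out, 1

x = [-1.0, 0.5, 2.0]
u, u_buffers = unfused(x)
f, f_buffers = fused(x)
```

Both versions compute the same result, but the fused loop writes a third of the buffers; on real hardware this reduction in memory traffic is where the speedup comes from.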